The article introduces Antislop, a framework for identifying and eliminating the repetitive, overused patterns known as "slop" that degrade the quality of language-model output. It comprises three components: the Antislop Sampler, which suppresses unwanted strings at inference time; an automated profiling pipeline that generates training data; and a fine-tuning method, Final Token Preference Optimization (FTPO), which substantially reduces slop while maintaining or improving model performance across a range of evaluation tasks.
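The core idea of suppressing unwanted strings at inference time can be illustrated with a toy sketch (not the authors' implementation): before sampling each token, mask out any token that would complete a banned "slop" phrase. The vocabulary, banned bigrams, and helper names below are hypothetical, chosen only for illustration.

```python
import math
import random

# Hypothetical toy vocabulary and banned "slop" bigrams.
VOCAB = ["a", "tapestry", "of", "testament", "to", "the", "cat", "sat"]
BANNED = [("tapestry", "of"), ("testament", "to")]

def banned_next_tokens(prefix):
    """Return tokens that would complete a banned phrase after this prefix."""
    banned = set()
    for phrase in BANNED:
        k = len(phrase) - 1
        if k > 0 and tuple(prefix[-k:]) == phrase[:k]:
            banned.add(phrase[-1])
    return banned

def sample_next(prefix, logits, rng):
    """Sample the next token, masking any banned continuation."""
    banned = banned_next_tokens(prefix)
    masked = [l if tok not in banned else float("-inf")
              for tok, l in zip(VOCAB, logits)]
    # Softmax over the masked logits; -inf entries get zero probability.
    m = max(masked)
    exps = [math.exp(l - m) for l in masked]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(VOCAB, weights=probs, k=1)[0]

rng = random.Random(0)
prefix = ["a", "tapestry"]
# Logits strongly favor "of", but it would complete a banned bigram,
# so the masked sampler never emits it.
logits = [0.0, 0.0, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0]
draws = {sample_next(prefix, logits, rng) for _ in range(50)}
assert "of" not in draws
```

A practical sampler would operate on tokenizer IDs and handle multi-token phrases with backtracking, but the logit-masking step shown here is the essential mechanism.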